Incorporating AI into your Workflow

A Practical Guide for Economists

Gabriel Lade

The Ohio State University

Part I

AI in Your Research

Think, Pair, Share

For the following use cases, discuss whether you think AI is appropriate to use as part of your research:

  • Developing research questions/ideas
  • Coding
  • Theoretical modeling and derivations
  • Editing
  • Writing
  • Summarizing (post-publication)

Part II

My Thoughts on AI

Two observations and a question

  1. AI has made it virtually costless to write grammatically correct prose with no spelling errors.

  2. AI has given everyone a (near) Ph.D.-level coding assistant and data analyst.



What is the equilibrium response for economics job market candidates?

A few thoughts

  • The average quality (and expectations) of a JMP will be higher every year
  • Expectations for more complex analyses will increase
  • Bad writing will likely become a very bad signal
  • Boring/generic writing will signal overreliance on AI
  • Interpersonal skills will become more critical

Today’s focus: Building an AI-augmented research infrastructure

Finding the Balance

1. Learn the fundamentals — You can’t evaluate code you don’t understand. AI will confidently give you wrong answers—you need to catch them. Think of AI as a fast RA who occasionally hallucinates.

2. But don’t fall behind — Your peers are using these tools. The productivity gap compounds quickly. “I prefer the old way” is not a competitive strategy.

3. The old system was imperfect — Research errors happened before AI—and were caught late or not at all. Better workflows + AI review can catch errors earlier.

The goal isn’t “AI vs. no AI”—it’s building systems that make research more robust, faster, and more replicable.

The old system wasn’t a gold standard

Case 1: Reinhart & Rogoff (2010) — An Excel formula didn’t include 5 rows of data. Paper claimed high debt causes -0.1% growth; corrected: +2.2% growth. Used to justify global austerity. Found by a grad student, 3 years later.

Case 2: Deschênes & Greenstone (2007 AER) — Coding errors and missing weather data in climate-agriculture analysis. Paper claimed climate change would increase ag profits; corrected results showed the opposite. Found 5 years later via replication.

Part III

Building Your Stack

1. Use a cloud platform + clear folder structure

Work from OneDrive, Dropbox, or Google Drive rather than your local machine alone. A consistent folder structure makes AI tools more effective because they can understand how your project is organized, and it gives you collaboration and automatic backup at the same time.

project-folder/
├── data/
│   ├── raw/
│   └── clean/
├── code/
│   ├── 01-clean.R
│   ├── 02-analysis.R
│   └── 03-figures.R
├── output/
├── paper/
└── README.md
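The skeleton above can be created in a few commands (the folder names are just the illustrative ones from the tree; adapt them to your project):

```shell
# Build the illustrative project skeleton from the slide above
mkdir -p project-folder/data/raw project-folder/data/clean
mkdir -p project-folder/code project-folder/output project-folder/paper
touch project-folder/README.md   # placeholder README; describe the project here
```

Running this once per project means every repo you (and your AI tools) open looks the same.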

2. Use Git/GitHub for version control

Why Git?

  • Track every change to your code
  • Revert to any previous version
  • See exactly what changed and when
  • Collaborate without overwriting

Why GitHub?

  • Cloud backup of your code history
  • Share and collaborate easily
  • AI tools integrate directly with repos
  • Replication packages ready to share


No more analysis_v3_final_FINAL2.R — Git tracks it all.


git diff analysis.R
- model <- lm(y ~ x1 + x2, data = df)
+ model <- lm(y ~ x1 + x2 + x1:x2, data = df)  # added interaction


Confused about how to set up Git and GitHub? Ask AI!
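As a starting point, a minimal command-line setup might look like this (the folder name and commit contents are placeholders; substitute your own details):

```shell
# One-time identity setup (substitute your own name and email)
git config --global user.name  "Your Name"
git config --global user.email "you@university.edu"

# Turn a project folder into a Git repository
mkdir -p project-folder && cd project-folder
git init                          # create the repository
echo "# My project" > README.md   # placeholder file so there is something to commit
git add .                         # stage everything
git commit -m "Initial commit"    # first snapshot
```

To back this up on GitHub, create an empty repository there, then run `git remote add origin <repo-url>` followed by `git push -u origin main` (or `master`, depending on your default branch name).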

3. Use AI coding tools: Claude Code vs. Cursor

Claude Code

What: Terminal-based, agentic coding assistant

Best for: R and data analysis workflows, file operations & project setup, autonomous multi-step tasks, Git integration

Runs from command line, can execute code and modify files directly.

Cursor

What: AI-powered code editor (VS Code fork)

Best for: Iterative code editing, larger software projects, inline suggestions while typing, tab completion on steroids

Full IDE experience with AI built in.

4. Leveling Up: Structured Workflows

Basic Claude Code is powerful, but can be unpredictable.

Structured workflows add:

  • CLAUDE.md — Project config Claude reads every session
  • Plan-first approach — Review the approach before implementation
  • Quality gates — Standards enforced before committing


Based on workflows from researchers at Cornell (Rudik) and Emory (Sant’Anna)

The .claude/ Folder Structure

my-project/
├── CLAUDE.md                    # Project "constitution"
├── .claude/
│   └── rules/
│       ├── r-code-conventions.md    # R coding standards
│       ├── quality-gates.md         # 80/90/95 scoring
│       └── plan-first-workflow.md   # Plan → Approve → Implement
├── code/
├── data/
│   ├── raw/
│   └── clean/
└── output/

CLAUDE.md is read at the start of every session — Claude knows your conventions automatically.
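As an illustration, a CLAUDE.md for a project like the one above might look like this. Everything here is a hypothetical sketch (the project description, file names, and rules are made up); adapt the conventions to your own work:

```markdown
# CLAUDE.md (hypothetical example)

## Project
County-level analysis of weather and crop yields. All analysis in R.

## Conventions
- Scripts live in code/ and run in numbered order: 01-clean.R, 02-analysis.R, ...
- Never edit files in data/raw/; write cleaned data to data/clean/.
- Save all figures and tables to output/ from code, never by hand.

## Workflow
- Propose a plan and wait for approval before writing any code.
- Run the quality gates in .claude/rules/quality-gates.md before committing.
```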

The Plan-First Workflow

The pattern:

  1. Describe — What you want to accomplish
  2. Claude plans — Proposes approach, files, packages
  3. You review — Approve or refine
  4. Claude implements — Works autonomously after approval
  5. Verify & commit — Quality gates checked before commit

Think of it like hiring a contractor: discuss the plan, approve it, let them work, review the result.

Quality Gates

Every file gets a score (0-100). Scores below the threshold block the action:

Score   Meaning
80+     Commit allowed
90+     Ready for PR/sharing
95+     Publication quality

What gets scored:

  • R scripts: Structure, documentation, reproducibility
  • Figures: Clarity, labeling, consistency
  • Commit messages: Descriptive and meaningful

5. Write papers in Overleaf + GitHub

  • Overleaf = collaborative LaTeX editing in the browser
  • GitHub sync = version control for your paper
  • AI tools can help draft, edit, and format LaTeX
  • Figures from code → paper in one pipeline

The workflow:

Code (R/Python) → Figures & Tables → GitHub Repo → Overleaf Paper

One integrated system: Code changes automatically flow through to your paper.

Cautions & Pitfalls

  1. AI can hallucinate packages and functions — It will confidently suggest code using libraries that don’t exist. Always run and verify.

  2. Don’t let AI do your economic thinking — It’s a tool, not a co-author. Identification, intuition, and interpretation are yours.

  3. Always test AI-generated code — Run it. Check edge cases. Verify results make sense.

  4. Be thoughtful about sensitive data — Know what data you’re sending to AI services. Check your IRB and data use agreements.

  5. Don’t skip the planning step — Review Claude’s plan before implementation. Easier to fix a wrong plan than wrong code.

More Resources

  • Kevin Bryan’s “Tech Stack” — Modern research tools for economists
  • Fernández-Villaverde’s Git Tutorial — Git for academic research
  • Claude Code Docs — Official Claude Code documentation
  • Cursor — AI-first code editor
  • Rudik’s workflow — Make + R structured workflow
  • Sant’Anna’s workflow — LaTeX + R + quality gates

Questions?